Introduction
Get introduced to the importance of metrics for gauging your team's performance.
“When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.” —Lord Kelvin
Metrics are a commonly used tool for gauging the performance of a team, and are frequently the tool of choice for engineering managers who are first looking for clear, objective(-ish) ways to judge the performance of an individual or a team. However, as with most tools in software development, over-reliance on metrics can have the opposite of the desired effect, and in many cases can encourage the very behavior they were meant to protect against.
In a previous section, I talked about the need to set clear expectations, and it is easy to look at this discussion of metrics and numerical measurements and conclude that the best expectations are those rooted in metrics. But metrics, while useful in some contexts, carry inherent dangers that trap the unwary software development manager. Just as a thermometer can tell us how hot it is but not how to change the temperature, metrics can give us some indication of the results of our efforts, but not what to do to change those results.
Even so, metrics are a useful tool, because it’s easy to debate what “good code” looks like, what “good communication” entails, or what “familiarity” with a domain means. Too often as managers, we put vague descriptions into our expectations and then wrestle with whether somebody has achieved them; worse, we debate with our employee about what these things look like, because the employee believes they have demonstrated “expertise” and we don’t. Metrics are numbers, and while we can debate the interpretation of the numbers, or the causes behind them, we can’t usually argue with the numbers themselves. As Lord Kelvin put it above, when we have numbers, we “know something about it.”
As we get started down this path of thinking about metrics, cadence becomes another point of consideration. Do we take snapshots every month? Every week? Every day? Agile methodologies face this same dilemma: “How long should a sprint run?” is essentially a synonym for “How often should the most important metric—delivery to the customer—be captured?” If we capture too frequently, we run the risk of being caught up in micro-trends; if we capture too rarely, we run the risk of missing the peaks and troughs between the data points. It’s hard to offer concrete advice for every metric, but as a general rule of thumb, it’s better to gather often but interpret less often. Think about the stock market: we can get up-to-the-second data on the price of a stock, but it’s often better to wait days or even weeks before acting on a trend, to see whether it holds up.
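The "gather often, interpret less often" idea can be made concrete with a small sketch. Here, hypothetical daily samples of a metric (deployments per day, entirely made-up numbers) are collected at a fine grain, but collapsed into weekly averages before anyone looks for a trend:

```python
from statistics import mean

# Hypothetical daily samples of a metric (deployments per day over
# four weeks). The numbers here are illustrative only.
daily_deploys = [3, 1, 0, 5, 2, 0, 0,   # week 1
                 4, 2, 1, 6, 3, 0, 0,   # week 2
                 2, 0, 1, 4, 2, 0, 0,   # week 3
                 5, 3, 2, 7, 4, 0, 0]   # week 4

# Gather daily, but interpret weekly: collapse each 7-day window
# into a single average before looking for a trend.
weekly_avg = [mean(daily_deploys[i:i + 7])
              for i in range(0, len(daily_deploys), 7)]

for week, avg in enumerate(weekly_avg, start=1):
    print(f"week {week}: {avg:.1f} deploys/day")
```

The raw daily numbers swing wildly (weekends alone guarantee that), but the weekly aggregates are stable enough to compare period over period, which is the point of interpreting at a coarser cadence than you collect.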
The cadence at which the metrics themselves change is also important to consider: in some organizations, the tracked metrics change on a monthly basis, while in others they are the same ones that were in use back at the turn of the century. Both approaches are problematic. Never changing a metric means either the business has never changed (highly doubtful) or organizational ossification has set in and the metric has lost its effect. Constantly changing the metrics under examination means there is no consistent baseline against which to measure the deltas between periods. It’s always important to make sure the metrics being tracked support the goals of the measurement (in this case, performance management): change them when the goals change, and leave them alone when the goals haven’t.
Investigate, Review, Reflect, Act
The Misuse and Proper Use of Metrics